Health Canada paving the way for more AI/ML medical devices
Since 2018, Health Canada has been adapting its regulatory approach to better support digital health technologies, specifically medical devices. Key focus areas include artificial intelligence, software as a medical device, cybersecurity, medical device interoperability, wireless medical devices, mobile medical apps, and telemedicine. To meet this goal, Health Canada established the Digital Health Division under the Medical Devices Bureau and has been increasing its efforts to build in-house expertise. On October 27, 2021, Health Canada, the US Food and Drug Administration (FDA), and the United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA) jointly published the Good Machine Learning Practice for Medical Device Development: Guiding Principles. The document sets out 10 guiding principles to promote the safe, effective, and high-quality use of artificial intelligence and machine learning (AI/ML) in medical devices.
FDA, global peers create guiding principles for AI/ML medical devices
This year may go down as the point at which regulators started to get a handle on the use of AI and ML in medical devices. Over the past 10 months, FDA has issued an AI/ML action plan for regulating the technology in medical devices, the European Commission has released contentious plans covering the entire AI field, and the U.K. has proposed an overhaul of how it regulates AI as a medical device. Now, the U.S. and U.K. have begun working together on a global initiative. Working with their peers at Health Canada, officials at FDA and the U.K.'s MHRA have laid out 10 guiding principles. Collectively, the principles address concerns about the possible biases of algorithms, their applicability to clinical practice, and their potential to evolve as they are used in the real world. FDA and its collaborators have expanded on each of the principles, explaining, for example, that developers need to have "appropriate controls in place to manage risks of overfitting, unintended bias or degradation of the model" when their systems are "periodically or continually trained after deployment."
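To make the quoted principle concrete, here is a minimal, hypothetical sketch of one such post-deployment control: comparing a model's rolling field accuracy against the accuracy established at validation time and flagging it for review if performance degrades. The function name, parameters, and threshold are illustrative assumptions, not from the guiding principles or any regulatory text.

```python
# Hypothetical control for a periodically retrained model: flag it for
# human review when post-deployment accuracy drifts below the level
# established during pre-market validation. All names and the 5-point
# tolerance are illustrative assumptions.

def check_degradation(recent_correct, recent_total,
                      baseline_accuracy, tolerance=0.05):
    """Return True if rolling accuracy has fallen more than `tolerance`
    below the accuracy locked in at validation time."""
    if recent_total == 0:
        return False  # no post-deployment data collected yet
    rolling_accuracy = recent_correct / recent_total
    return rolling_accuracy < baseline_accuracy - tolerance

# A model validated at 92% accuracy drifts to 85% in the field:
print(check_degradation(recent_correct=85, recent_total=100,
                        baseline_accuracy=0.92))  # True (degraded)
print(check_degradation(recent_correct=91, recent_total=100,
                        baseline_accuracy=0.92))  # False (within tolerance)
```

In practice, regulators expect such monitoring to be part of a broader quality system; this sketch shows only the simplest form of the idea.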